Reading – Gazzaniga, Cognitive sciences – language + brain

Greg Detre

Tuesday, 26 March, 2002

pg 289

Psycholinguistic theories of language

Mental lexicon

mental lexicon = mental store of information about words – includes:

semantic information = word's meaning

syntactic information = how the words are combined to form a sentence

word forms = spelling + sound pattern

plays a central role in language

some psycholinguistic theories distinguish between input and output lexica

representation of orthographic (vision-based) and phonological (sound-based) word forms

 

a normal adult speaker:

has passive knowledge of about 50,000 words

can recognise + produce 3 words/second without any difficulty

 

efficient organisation (unlike a dictionary):

cannot be alphabetic (would take longer to find words depending on their alphabetic position)

no fixed content

more frequently accessed words are accessed more quickly

organised as information-specific networks:

semantic network (Collins & Loftus, 1975) = words as conceptual nodes

the strength of connection + distance between nodes are determined by the semantic/associative relations between words

assumes that activation will spread mostly to closely-connected nodes
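
not from the book – a toy Python sketch of spreading activation to make the idea concrete; the words, link weights and decay factor are all invented:

```python
# links between conceptual nodes; stronger weight = closer semantic/associative
# relation. all words, weights and the decay factor are invented.
links = {
    ("car", "truck"): 0.8,
    ("car", "road"): 0.6,
    ("truck", "road"): 0.5,
    ("car", "banana"): 0.05,   # distant relation
}

def neighbours(word):
    for (a, b), w in links.items():
        if a == word:
            yield b, w
        elif b == word:
            yield a, w

def spread(source, decay=0.5, steps=2):
    """activation levels after spreading from a fully-active source node"""
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(steps):
        new_frontier = set()
        for node in frontier:
            for other, w in neighbours(node):
                gained = activation[node] * w * decay
                if gained > activation.get(other, 0.0):
                    activation[other] = gained
                    new_frontier.add(other)
        frontier = new_frontier
    return activation

# priming prediction: after the prime 'car', 'truck' is strongly pre-activated
# and 'banana' barely at all, so lexical decision on 'truck' should be faster
print(spread("car"))   # {'car': 1.0, 'truck': 0.4, 'road': 0.3, 'banana': 0.025}
```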

Experimental evidence

semantic network idea supported by evidence from semantic priming studies:

lexical decision task = subjects are presented with word pairs

prime = the first word

target = the second word can be a real word (related or unrelated in meaning to the prime), non-word (e.g. 'sfhsi') or pseudo-word (e.g. 'fisch')

have to decide as fast + accurately as possible whether the target is a word, and press a button

subjects are faster + more accurate at word decisions when the target is preceded by a related prime (e.g. car-truck), than unrelated

or they have to pronounce the target

again, naming latencies are shorter for related than unrelated words

post-lexical effects = happen after representations in the mental lexicon have been accessed

e.g. expectancy-induced priming might occur if the time between the presentation of primes is long and the proportion of related pairs in a list is large

subjects expect the possible target words (the expectancy set) after hearing the prime

semantic matching = when subjects actively try to match the meaning of the target word with the meaning of the prime

leads to faster �yes� responses to words that match than to words that don't match the meaning of the preceding context word

therefore, apparently word priming effects do not always result from implicit/automatic types of processes

some psycholinguistic theories consider word meaning to be part of a larger conceptual network outside the mental lexicon

other models:

concepts are represented by their semantic features/properties

e.g. dog's semantic features are 'is animate', 'has 4 legs' etc.

problem of activation: how many features have to be activated for a person to recognise a dog? how many are stored? and some words are more prototypical than others
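
again a toy sketch of my own – the feature sets and threshold are invented, and the arbitrary threshold is exactly the activation problem above:

```python
# invented feature sets for a few concepts
FEATURES = {
    "dog":   {"is animate", "has 4 legs", "barks", "has fur"},
    "cat":   {"is animate", "has 4 legs", "meows", "has fur"},
    "chair": {"has 4 legs", "is furniture"},
}

def recognise(observed, threshold=0.6):
    """name the stored concept whose features best overlap the observed ones"""
    best, best_score = None, 0.0
    for concept, feats in FEATURES.items():
        score = len(observed & feats) / len(feats)  # proportion of features matched
        if score > best_score:
            best, best_score = concept, score
    return best if best_score >= threshold else None

# the arbitrary threshold is the open question above: how many features
# have to be active before we say 'dog'?
print(recognise({"is animate", "has 4 legs", "has fur"}))  # 'dog' (ties with 'cat'!)
```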

picture-word interference paradigm = subjects are asked to name pictures as fast + accurately as possible

soon after the picture is presented, an interfering auditory word is shown

the naming latency for 'sheep' increases if the interfering stimulus is 'goat', but not if it's 'house'

Biological evidence

Some types of neurological problems

infer functional organisation of the mental lexicon from different types of neurological problems

Wernicke's aphasia = errors in speech production (semantic paraphasias)

usually due to lesions in the posterior parts of the left hemisphere

e.g. use 'horse' when they mean 'cow'

deep dyslexia = similar errors in reading

progressive semantic dementia = initially show impairments in the conceptual system alone

have difficulty assigning objects to a semantic category, and often name a category when asked to name a picture (e.g. 'animal' when seeing a horse)

supports the semantic network idea because related meanings are substituted, confused or lumped together, as you would expect from a degrading information-specifying system

Warrington & McCarthy (1983, 1987):

problems localised to specific semantic categories, e.g. animals vs objects

patients who have great difficulties pointing to pictures of food or living things when presented with a word, whereas their performance with man-made objects is much better

same with naming from pictures

some patients have impaired performance with man-made objects but preserved identification of food + living things

suggested that the patients' problems are indicative of the types of information stored with different words in the mental lexicon

e.g. biological categories (fruits, foods, animals etc.) rely on physical properties + visual features, whereas man-made objects are identified by functional properties

evidence for a conceptually-organised mental lexicon

PET scans

PET reveals how dissociations in neurological patients can be identified in normal brains:

naming activates:

pictures of animals or tools → ventral temporal lobe bilaterally

animals → additionally, the left medial occipital lobe (associated with the early stages of visual processing)

tools → additionally, the left premotor area (activated by imagining hand movements)

 

conceptual representations of living things vs man-made tools rely on neuronal circuits engaged in processing perceptual + functional information

Damasio et al. (1996)

powerful evidence for category-specific deficits (Hannah Damasio et al., 1996)

naming task (large population of patients with lesions), under three conditions:

1.     naming famous faces

2.     naming animals

3.     naming tools

if they could describe (semantic) features of the object (i.e. they recognised what it was) but could not name it, then it was scored as a naming error (in order to dissociate conceptual problems from word-retrieval problems)

a correct naming response is the same as normal subjects'

results:

30 patients showed impairment

29 of these had a left hemisphere lesion

7 patients – deficit naming faces

5 patients – deficit naming animals

7 patients – deficit naming tools

11 patients remaining – combination of problems in word retrieval for faces/animals/tools, faces/animals, animals/tools, but never for faces/tools

could correlate naming deficits with specific regions:

word-retrieval problem → lesion area:

persons → left temporal pole (TP)

animals → anterior part of the left inferior temporal (IT) lobe

tools → posterolateral part of the left IT lobe plus lateral temporo-occipito-parietal junction (IT+)

 

the same areas lit up for normal subjects naming the same types of objects

indicates that the brain has three levels of representation for word knowledge:

1.     top level – conceptual preverbal level, containing the semantic information about the word (e.g. that 'bird = beak + feathers + wings')

2.     lexical level – the word form that matches the concept is represented (e.g. 'bird')

3.     phonological level – the sound information that corresponds to the word

From written text to sound – reading aloud

how do we know how to pronounce words correctly when we read aloud – two ways (dual routes):

1.     grapheme-to-sound conversion

graphemes = letters

phonemes = sounds

difficult to do in English, e.g. 'ph' is pronounced differently in 'physiology' and 'uphill'

we could be relying on rules based on the language's regularities – but it's difficult to come up with rules to predict all combinations of letters + sounds

2.     direct lexical route = directly from reading to pronunciation, i.e. from whole-word orthographic input to representations in the mental lexicon
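
a toy sketch of the two routes (mine, not the book's – the mini-lexicon and rules are invented and nothing like real English phonology); knocking out one route or the other reproduces the two dyslexias described below:

```python
LEXICON = {          # route 2: direct lexical route (whole word -> stored sound)
    "broad": "/brawd/",
    "head": "/hed/",
}

DIGRAPH_RULES = {"ea": "ee", "ph": "f"}   # route 1: toy grapheme-to-phoneme rules

def read_aloud(word, lexical_route=True, gpc_route=True):
    if lexical_route and word in LEXICON:
        return LEXICON[word]              # known word: the direct route wins
    if gpc_route:
        out, i = "", 0
        while i < len(word):
            digraph = word[i:i + 2]
            if digraph in DIGRAPH_RULES:  # apply a rule where one exists...
                out += DIGRAPH_RULES[digraph]
                i += 2
            else:                         # ...otherwise letters map to themselves
                out += word[i]
                i += 1
        return "/" + out + "/"
    return None                           # neither route available

print(read_aloud("grimp"))                      # '/grimp/' – rules handle pseudo-words
print(read_aloud("head", gpc_route=False))      # '/hed/'  – deep dyslexia: lexicon only
print(read_aloud("head", lexical_route=False))  # '/heed/' – surface dyslexia: over-regularised
```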

evidence from patients with acquired dyslexia = reading problems due to brain damage

modality-specific deficit – patients can comprehend spoken language, and maybe even produce written language (alexia without agraphia)

two types of acquired dyslexia:

i.     deep/phonological dyslexia = cannot read aloud words that do not have a representation in the mental lexicon

i.e. they're unable to read aloud pseudo-words like 'grimp', but have no problem reading irregular words like 'broad' or difficult words like 'chrysanthemum'

also make semantic errors (e.g. 'rose' for 'iris')

usually perform well on rhyming tasks (with auditory stimuli), therefore no loss of phonological representation

rely exclusively on the direct lexical route when reading

ii.     surface dyslexia = rely only on regularity rules

e.g. they over-regularise the pronunciation of irregular words (e.g. read 'heed' for 'head')

seem to be translating directly from letter to sound representations

these two forms of dyslexia provide a powerful double dissociation, indicating two routes by which text we read can be converted into verbal output

Linguistic input

after considering how words are represented, the next step is identifying how the linguistic input leads to understanding

 

spoken word → acoustic analysis → auditory input lexicon → cognitive system

written word → orthographic analysis → orthographic input lexicon → cognitive system

 

Perceptual analyses

first task: identify the individual words in a spoken utterance/written text

perceptual analysis of input is a prelexical process which does not involve the mental lexicon

differences between the visual + auditory modalities:

written language is much more segmented (by spaces + punctuation) than spoken

e.g. the speech waveform of the word 'captain' looks like two words because of what looks like a 'silence' in the middle of the word

also, there are rarely gaps between spoken words because of coarticulation (e.g. 'I dunno' or 'do you mean')

spoken input provides other clues about how to divide the speech stream into meaningful segments

prosodic information = what the listener derives from the speech rhythm and the pitch of voice

speech rhythm – varying the duration of words and placing pauses between words

very obvious when asking a question (rise towards the end of the sentence) or emphasising a PoS (raising voice and pausing after the critical part)

written input lacks prosody (so we sometimes add 'mental prosody', e.g. with a comma)

e.g. 'Because the boy left the room seemed empty'

Written input

must recognise a visual pattern by analysing primitive features (i.e. the shape of the letters)

e.g. horizontal/vertical lines, closed/open curves, intersections etc.

pandemonium model for letter recognition (Selfridge, 1959)

demon = a discrete stage/substage of the information processing

sensory input is stored as an iconic memory by the 'image demon'

28 feature demons decode features in the iconic representation of the sensory input

all the representations of letters with these features are activated by cognitive demons

the representation that best matches the input is selected by the decision demon
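
a toy sketch of the demon pipeline (my own; the feature inventory is invented):

```python
# invented feature inventory per letter
LETTER_FEATURES = {
    "A": {"left oblique", "right oblique", "horizontal bar"},
    "H": {"left vertical bar", "right vertical bar", "horizontal bar"},
    "O": {"closed curve"},
    "P": {"left vertical bar", "closed curve"},
}

def recognise_letter(iconic_memory):
    """iconic_memory: the set of features the feature demons report"""
    shouts = {}
    for letter, feats in LETTER_FEATURES.items():
        # each cognitive demon shouts in proportion to its matched features,
        # penalised for features it expects but doesn't see
        matched = len(iconic_memory & feats)
        missing = len(feats - iconic_memory)
        shouts[letter] = matched - 0.5 * missing
    return max(shouts, key=shouts.get)   # the decision demon picks the loudest

print(recognise_letter({"left vertical bar", "closed curve"}))  # 'P'
```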

physiology of visual feature analysis:

we know something about visual feature analysis from single-cell recording, but very little about letter and word recognition at the cellular level

we can't use recordings in monkeys (animals don't read, so there's no animal model for this)

Petersen et al. (1990) PET study:

areas in the human extra-striate visual cortex light up when processing visual word forms

appears to be a highly-specialised module for recognition + processing of the visual word forms in our native languages

the so-called 'word form area' is located in the early stages of visual processing, lateralised to the left hemisphere

lesion: can give rise to pure alexia = patients cannot read words even though other aspects of language are normal

Spoken input

auditory + continuous – enormous variability in input (rate of speech, dialect, sex of speaker etc.)

cannot be a one-to-one relation between physical signal + representations in memory

cannot rewind/re-read spoken input – temporal dimension

most theories assume the first step is to translate acoustic signals into abstract featural representations (perhaps based on spectral properties of the incoming signal)

PET analysis of processing of single words:

real words presented auditorily activate the left temporoparietal cortex and anterior superior temporal regions

lesions of the nearby angular and supramarginal gyri lead to a deficit in phonological processing

Processing of words

cohort model (Marslen-Wilson & Tyler, 1980) (???) components of word/lexical processing:

1.     lexical access = output of perceptual analysis is probably projected on to word form representations in the mental lexicon

activated representations spread to semantic + syntactic attributes of the word forms

assumes that speech processing starts with the very first phoneme that the listener has identified as the onset of a word

word-initial cohort = the set of all potentially-correct word forms activated by the onset (e.g. 'cap', 'capital', 'caption' etc.)

2.     lexical selection = selecting from the activated word representations for the one that best matches the total sensory input

selecting the right word form depends on the incoming sensory information and the number of competitors in the word initial cohort

a word is selected at its uniqueness point = the point at which it becomes uniquely distinguishable from all other words (see the sketch after this list)

need to take into account syntactic + semantic properties in selecting words in a sentence

3.     lexical integration =

???
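
a toy sketch of cohort narrowing and the uniqueness point (mine; the mini-lexicon is invented, with letters standing in for phonemes):

```python
# a toy sketch of cohort narrowing and the uniqueness point
LEXICON = ["cap", "capital", "caption", "captain", "cat"]

def cohort_trace(spoken_word):
    for n in range(1, len(spoken_word) + 1):
        prefix = spoken_word[:n]
        cohort = [w for w in LEXICON if w.startswith(prefix)]   # word-initial cohort
        print(f"heard {prefix!r}: cohort = {cohort}")
        if len(cohort) == 1:
            print(f"uniqueness point after {n} segments")       # selection can happen here
            return

cohort_trace("captain")
```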

The role of context in word recognition

two types of representations play a role in word processing in the context of other words:

lower-level representations = constructed from sensory input

higher-level representations = constructed from the context preceding the word being processed

there is a question of whether linguistic context influences word processing before/after lexical access + selection

i.e. at what point the higher- and lower-level representations interact

Models of word recognition

three types of models explain word comprehension:

modular (autonomous) models = normal language comprehension is executed within separate + independent modules

higher-level representations do not influence lower-level representations

the flow is strictly bottom up/data-driven

the representation of context information does not affect lexical access + selection

interactive models = all types of information participate in word recognition

context can influence even before sensory information is available, by changing the activation status of the word form representations in the mental lexicon

hybrid models = lexical access is autonomous (not influenced by higher-level information), but lexical selection can be influenced by sensory + higher-level context information

the preceding context provides information about which word forms are possible, thereby reducing the number of activated candidates

growing evidence that at least lexical selection is influenced by higher-level context information

duuhhh – surely the answer must be an interactive model, with only weak higher-level influence on lexical access???

Integration of words in sentences

in order to understand a message, you have to do more than just recognise individual words

you have to integrate the syntactic + semantic properties of the recognised word into a representation of the whole sentence

this integration process has to be executed in real time

syntactic analysis goes on even in the absence of real meaning

subjects are faster at detecting a target word in a sentence when it doesn't make any sense but is grammatically intact than when the grammar is locally disrupted:

baseline condition: 'The maid was carefully peeling the potatoes in the garden because during the summer the very hot kitchen is unbearable to work in' – 300 msec to press a button from the onset of the target word 'kitchen'

semantically absurd but grammatically normal: 'An orange dream was loudly watching the house during smelly nights because within these signs a very slow kitchen snored with crashing leaves' – response to target word slowed by 60 msec

syntax also disrupted: 'An orange dream was loudly watching the house during smelly nights because within these signs a slow very kitchen snored with crashing leaves' – response further slowed by 45 msec

 

agrammatic aphasia = deficits in the ability to process syntactic information

generally produce two- or three-word sentences that consist exclusively of content words and hardly any function words, grammatical or morphological markers and inflections

function words = used to mark a phrase, e.g. 'and then'

great difficulty understanding complex syntactic structures

usually associated with lesions of Broca's area of the left hemisphere, but much variability has been found

 

sentence = linear arrangement of phrases + words

can be represented by a hierarchical tree reflecting the sentence structure

e.g. 'The little old lady bit the gigantic dog', 'The spy saw the cop with binoculars' etc.

meaning changes with syntactic structure

syntactic nodes = mental representations of the tree's components

 

garden-path model (Frazier et al., 1987) = theory of syntactic structure based on sentences' preferred interpretation

garden-path effects are when you're led to believe something that seems correct at first but is not

syntactic analysis/parsing is initially based on structural information only (i.e. autonomous), and not based on other linguistic processes at all

other interactive models assume that semantic information can immediately interact with syntactic information in sentence comprehension

economy principle = assumes that we process syntactic information in order to minimise what we have to do, in order to meet demanding time pressures

two mechanisms:

1.     minimal attachment = tries to produce a structure where the minimum number of additional syntactic nodes must be computed (sketched below)

2.     late closure = tries to assign incoming words to the syntactic phrase or clause currently being processed

with a garden-path sentence, the preferred interpretation leads to a wrong solution, and so must be reanalysed
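
a toy sketch (mine; simplified bracketings) making the node-counting economy idea concrete for the ambiguous 'The spy saw the cop with binoculars':

```python
# trees as nested tuples: (label, child, ...); words are plain strings
def count_nodes(tree):
    if isinstance(tree, str):
        return 0                  # words themselves aren't syntactic nodes
    return 1 + sum(count_nodes(child) for child in tree[1:])

# reading 1 – PP attached to the VP: the seeing is done with binoculars
vp_attach = ("S", ("NP", "the spy"),
                  ("VP", ("V", "saw"), ("NP", "the cop"),
                         ("PP", "with binoculars")))

# reading 2 – PP attached inside the object NP: the cop has the binoculars
# (needs an extra NP node to host the PP)
np_attach = ("S", ("NP", "the spy"),
                  ("VP", ("V", "saw"),
                         ("NP", ("NP", "the cop"), ("PP", "with binoculars"))))

print(count_nodes(vp_attach), count_nodes(np_attach))  # 6 vs 7 -> prefer VP attachment
```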

Speech production

Levelt (1989, 1993) model for language production:

comprehension starts with spoken/written input that has to be transformed into a concept, whereas production starts with a concept for which we have to find the appropriate words

input for comprehension can come from internal 'speech'

steps:

1.     prepare the message – two crucial aspects:

a)     macro-planning = determining what you want to express

this communicative intention is planned in goals + sub-goals expressed in an order that best serves the communicative plan

b)     micro-planning = how the information is expressed, which means taking perspective ('the house is next to the park' vs 'the park is next to the house')

determines word choice and the grammatical role the words play (e.g. subject, object, theme)

conceptual message = the output of the macro-planning + micro-planning

constitutes the input to the formulator

2.     formulator = puts the message in a grammatically + phonologically correct form

grammatical encoding = message's surface structure is computed

i.e. its syntactic but not conceptual representation

three levels in Levelt's mental lexicon model (see Damasio's model):

1.     conceptual level

e.g. 'sheep' contains 'grows wool', 'gives milk', 'is an animal' etc.

2.     lemma level

e.g. 'sheep' – also 'mouton', noun, gender

lemmas = lowest-level elements of the surface structure

stored in the mental lexicon as a semantic network

contain information about the word's syntactic properties and semantic specifications

syntactic properties = part of speech, gender etc.

semantic specifications = the conceptual conditions where it is appropriate to use a certain word

3.     lexeme or sound level

when the subject is presented with a picture (e.g. of a sheep):

concept representing 'sheep' is activated, along with concepts for 'goat', 'wool', 'milk' etc.

activated concepts activate lemma nodes (e.g. 'sheep'/'mouton', 'goat'/'chevre' etc.)

the right lemma is chosen (lexical selection)

this lemma activates the lexeme (= sound form) = phonological encoding

if you can't activate the sound form, you get the 'tip of the tongue' feeling

plan our articulation: syllables are mapped onto motor patterns that move the tongue, mouth and vocal apparatus

can repair errors in our speech at this stage, e.g. by saying 'um' to give ourselves time
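
a toy sketch of the three-level lookup (mine; the entries are invented) – knocking out the lexeme level reproduces the tip-of-the-tongue/anomic pattern described below:

```python
# invented entries at Levelt's three levels
CONCEPTS = {"sheep": {"grows wool", "gives milk", "is an animal"}}
LEMMAS   = {"sheep": {"part of speech": "noun"}}     # syntactic properties
LEXEMES  = {"sheep": "/ʃiːp/"}                       # sound forms

def name_picture(concept, lexemes=LEXEMES):
    if concept not in CONCEPTS:
        return "no concept"               # conceptual-level deficit
    if concept not in LEMMAS:
        return "no lemma"                 # cannot select a word
    sound = lexemes.get(concept)
    if sound is None:
        # lemma selected but sound form inaccessible: tip of the tongue / anomia
        return f"tip of the tongue: can describe {CONCEPTS[concept]} but not name it"
    return sound                          # phonological encoding succeeds

print(name_picture("sheep"))              # '/ʃiːp/'
print(name_picture("sheep", lexemes={}))  # anomic pattern: description without the name
```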

brain damage can affect any of these processing stages:

anomic patients = deficit in naming

extreme tip of the tongue state

can't name a picture, but can often give you an accurate description

this is not a problem of articulation, because they can readily repeat the word aloud

it is a problem on the lexeme level

semantic paraphasias (often accompany Wernicke's aphasia) = produce related words

could be a problem of inappropriate selection of concepts, lemmas or lexemes

might even be a problem at the phoneme level, of incorrectly substituting sounds so that one word comes out as another

dysarthria (often accompanies Broca's aphasia) = hinders articulation and results in effortful speech

because the muscles that articulate the utterance can't be controlled

Language and brain

struggled with the concept of localisation

unable to describe how detailed psycholinguistically defined functions map directly onto neuroanatomy

clues from aphasia

Aphasia

aphasia = deficit in language comprehension + production that accompanies neurological damage

very common

40% of strokes produce some aphasia

sometimes only temporarily during the acute period of the first few months

primary aphasia = problems with the language-processing mechanisms themselves

secondary aphasia = resulting from memory impairments, attention disorders, or perceptual problems

Historical findings in aphasia

Broca's patient ('tan') – lesion in the posterior portion of the left inferior frontal gyrus

Broca studied other patients with language disorders – right-handed, and generally had right hemiparesis (weakness of the right arm + leg)

Broca's area = inferior-lateral left frontal lobe

expressive aphasia = difficulties in producing speech

 

Wernicke's two patients had problems understanding spoken language after a stroke

fluent speech, but nonsensical sounds, words + sentences

lesion in the posterior regions of the superior temporal gyrus

near the auditory area, which is in the superior temporal region of Heschl's gyrus

Wernicke proposed that the language comprehension problems arose due to lost word-related memories, and the nonsense speech from patients' inability to monitor their own output

posterior inferior-lateral left parietal and supratemporal areas

receptive aphasia = hampered comprehension of language

arcuate fasciculus = bundle of axons originating in Wernicke's area, goes through the angular gyrus and terminates in Broca's area

A simple model of language organisation

emphasised word-level analysis, leading to the classic localisationalist view/connectionist model of language

re-popularised by Geschwind (1967), first described by Lichtheim (1885)

three interacting brain centres:

production/motor area (Broca's)

comprehension/auditory area (Wernicke's)

conceptual area

predicts seven possible aphasias – lesions in the production or comprehension area, or any of the tracts to/from the three areas

 

auditory input → phonological lexicon → conceptual area → speech planning + programming → motor output

problems:

difficult to observe pure forms of the aphasias, and so it's difficult to know how distinct they really are

before neuroimaging, relied on poor lesion localisation, sometimes autopsies, or guesses based on co-occurrence of other more well-defined symptoms (e.g. hemiparesis)

great variability in how lesions are defined in autopsy as well as neuroimaging data

great variability in the lesions themselves, e.g. sometimes anterior lesions produce Wernicke�s aphasia

many patients fall into more than one category

Classification of aphasia

three main test parameters for language disorders:

1.     spontaneous speech

2.     auditory comprehension

3.     verbal repetition

Broca's aphasia

sometimes the ability to sing normally, recite phrases + prose or count are left undisturbed

speech: often telegraphic + effortful, coming in bursts

dysarthria = loss of control over articulatory muscles

speech apraxia = deficits in the ability to program articulations

also have comprehension deficits, though they are not devoid of grammatical knowledge

Broca's area

however, Dronkers (1996) reported that only 10/22 patients with lesions in Broca�s area had Broca�s aphasia

The regions classically defined as Broca's area are generally limited to the cortical gray matter of the pars opercularis and pars triangularis in the posterior inferior frontal gyrus of the left hemisphere. The involvement of underlying white matter, cortex and subcortical structures may clarify the role of Broca's area in language disorders. By the turn of the last century, scientists were proposing that structures deep to Broca's area were responsible for the deficits of Broca's aphasia. Brain areas including the insular cortex, the lenticular nucleus of the basal ganglia, and fibres of passage have been implicated in Broca's aphasia. Recent research by Dronkers found that all patients with Broca's aphasia, including those studied by autopsy or neuroimaging, have damaged insulae (including Broca's original patient)

Wernicke's aphasia

primarily a disorder of language comprehension; patients also cannot speak meaningful sentences

Wernicke's area

Wernicke's area includes the posterior third of the superior temporal gyrus. However, language comprehension deficits also arise from damage to the junction between the parietal and temporal lobes, including the supramarginal and angular gyri

in one study of 70 patients with Wernicke's aphasia, about 10% had damage confined to regions outside of Wernicke's area

and the aphasia of some patients with damage to Wernicke's area may improve over time, as they come to comprehend more

dense and persistent Wernicke's aphasia is assured only if there is damage in Wernicke's area and in the cortex of the posterior temporal lobe – or damage to the underlying white matter that connects temporal lobe language areas to other brain areas

thus Wernicke's area remains in the centre of a posterior region whose functioning is required for normal comprehension. Lesions confined to Wernicke's area lead to only temporary Wernicke's aphasia, because the damage to this area does not actually cause the syndrome; instead, secondary damage due to tissue swelling contributes to the most severe problems

Dronkers suggests that white matter underlying Wernicke's area may be key

The man without nouns

damage to Wernicke's area or surrounding cortex can sometimes produce anomia = the inability to name things in the world (the words that label objects)

comprehension can be intact and speech unaffected

H.W. = an intelligent businessman, studied by Baynes

stroke in his left hemisphere (large lesion, including large regions of the posterior language areas), left with anomia and almost no other deficits of note (except a slight right-side hemiparesis and slight deficit in face recognition)

Damage to connections between language areas

predictions from the Wernicke-Lichtheim-Geschwind model about damaged connections between areas:

conduction aphasia = damage to fibres projecting from Wernicke's to Broca's areas (= the arcuate fasciculus)

problems producing spontaneous speech as well as repeating speech, and sometimes use words incorrectly

can understand words that they hear/see and can hear their own speech errors, but cannot repair them

similar symptoms happen with lesions to the insula + portions of the auditory cortex

this could be because it is not so much Broca's and Wernicke's areas themselves after all that underlie Broca's and Wernicke's aphasias

damage to connections between the conceptual representation area(s) and Wernicke's area

conceptual representation area(s) = perhaps the supramarginal and angular gyri

predicts impaired ability to comprehend spoken inputs but not the ability to repeat what was heard (= transcortical sensory aphasia)

patients with lesions in the supramarginal + angular gyri display this

also, they have the unique ability to repeat what they heard and to correct grammatical errors in it when they repeat it

strong evidence that this aphasia comes from losing the ability to access semantic information without losing syntactic/phonological abilities

The nature of aphasic deficits

pivotal concern in aphasia: do the comprehension deficits in aphasic patients result from losses of stored linguistic information or from disruption of computational language processes that act on linguistic inputs?

surely the two are indistinguishable in a connectionist system and the question is ill-posed???

sentence picture task: Broca�s aphasics with severe problems in syntactic understanding were often unable to point to a picture that matched the meaning of the sentence they had to read, especially with more complex sentence constructions

however, they performed normally when they had to distinguish syntactically well-formed from incorrect sentences in a grammar judgement task, even with numerous syntactic structures

so they seem to be able to access + exploit structural knowledge sometimes

variable performance across linguistic tasks supports the notion that agrammatic comprehension results from a processing rather than a representational deficit – they still have syntactic knowledge, but sometimes they can't use it

lexical decision task: Wernicke's aphasics consistently displayed the normal pattern of priming for semantically or associatively related word pairs, even though they demonstrated severe impairment when asked to make judgements on word meaning

thus, despite problems in comprehension, when lexical-semantic processing is tested in an implicit test (e.g. lexical decision), patients do not show impairment

so lexical-semantic knowledge might still be preserved, but inaccessible

 

thus not all aphasic comprehension deficits are due to losses of stored linguistic information

patients that display this variability in performance may simply have trouble performing the tasks under real time pressure

or simply that the different tasks employ subtly different processing, or provide different inputs (i.e. structure the input information differently) to the processing areas in some way???

 

central question about the processing impairment model of comprehension:

when is the impairment manifested during language processing? two suggestions:

1.     lexical access is affected in aphasic comprehenders (i.e. Broca�s aphasics)

word priming experiments: Broca�s aphasics lack semantic priming in lexical decision tasks, in contrast with normal (+ Wernicke�s) subjects who show faster reaction times to words related to the prime

this is not decisive because:

i.     Broca's aphasics do show semantic priming in other semantic priming studies

ii.     when the time between prime onset and target onset (stimulus onset asynchrony – SOA) was short enough (<200 msec) to exclude post-lexical processes, semantic priming was revealed

2.     automatic lexical access might be largely intact but problems arise later, during lexical integration

i.e. that patients have difficulty integrating the meaning of words into the context established by preceding words in the sentence (quickly enough for normal comprehension)

Does the right hemisphere understand language?

left hemisphere is dominant for language processing

but the right hemisphere is not completely unable to understand language

split-brain patients:

their hemispheres can no longer communicate with each other at the cortical level

visual information presented to the left visual field goes exclusively to the right hemisphere, so perceptual input to the right hemisphere does not reach the left hemisphere's language areas

the right hemisphere can make simple semantic judgements and can read, but only grammatically simple sentences

patients with lesions in their right hemisphere:

generally non-aphasic, but they do have subtle language deficits

Hagoort et al. (1996): normal priming effects for words that are associatively related (e.g. 'cottage cheese')

but do not show priming effects for semantically related words (e.g. 'dog' and 'horse')

perhaps indicates that the left hemisphere is less good than the right at processing distant semantic relations

Chiarello (1991): found a left visual field/right hemisphere advantage for processing words that come from the same semantic category but have no associative relation

Neurophysiology of language

evidence from:

functional neuroimaging, stimulation of the human cortex during surgery, language brain potentials (event-related potentials) from normals and aphasics

Functional neuroimaging of language

use PET in two main ways:

1.     to study language organisation by passively investigating the metabolic correlates of language disorders during rest

e.g. find out which areas of the brain might have a lower metabolism after a stroke

the 18F-deoxyglucose method measures neuronal metabolism by using radioactively labeled glucose analogues that are taken up by active cells + trapped within them

the radioactivity can then be measured

advantage: learn about the effects of lesions that extend beyond the obviously damaged region

anatomical imaging with CT or MRI cannot detect many forms of functional lesions (i.e. regions of lowered metabolism or hypometabolism)

2.     blood flow activation studies in healthy volunteers

Metabolic correlates of aphasia

remote metabolic changes are typically observed in stroke victims who have observable symptoms, but not when the patient has no neurological symptoms

hypometabolism = lower glucose utilisation

functionally lesioned areas = connected to damaged areas, and so experience changes in activity and hence possibly function, but are not themselves damaged, though they may contribute to behavioural deficits

in aphasic patients, PET revealed hypometabolism in the temporoparietal region in 100% of the patients, regardless of aphasia type

of these, 97% had metabolic changes in the angular gyrus and 89% in the supramarginal gyrus

87% showed hypometabolism in the superior temporal gyrus

these PET-defined hypometabolic regions were then compared to the anatomically defined lesions

67% had parietal lesions

67% had damage in Wernicke�s area

58% had damage in the posterior middle temporal region

even when the anatomically-defined damage did not include the supramarginal and angular gyri, they had reduced resting neuronal metabolism

this shows that correlating behavioural deficits with only visible anatomical lesions almost certainly does not provide a complete picture

Broca's aphasics, with anatomical damage to Broca's area and perhaps surrounding tissue, have marked hypometabolism in the prefrontal cortex

this effect is reduced in Wernicke's aphasia and even further diminished in conduction aphasia

Activation of Broca's area as measured by PET

during speech production:

areas in and around Broca's area are activated

also activates the motor cortex area representing mouth + lips, the supplementary motor area

supports the idea that Broca's area is active in at least the motoric aspects of speech production

Broca's area is also activated by other language-related tasks,

e.g. deciding whether two nonsense syllables end in the same consonant, listening to words/stories in a native versus unknown language, understanding syntactically complex sentences – indicates a role for Broca's area in grammatical processes

however, it may rather be that Broca's area participates in the phonological loop in which linguistic information is held in a short-term buffer as part of the working memory system – this fits well with the articulatory role proposed for it

an alternative candidate for grammatical processing has been identified by PET as being in the anterior portions of the superior temporal gyrus, near area 22

Dronkers also implicated this area (the anterior superior temporal lobe) in aphasics' grammatical processing deficits

Activation of Wernicke's area as measured by PET

very difficult to activate the posterior language areas with PET

activations of the posterior superior temporal gyrus during tasks that require the discrimination of words while listening to speech, i.e. it may participate in perceptual analyses of auditory speech

with tasks requiring more complex grammatical/semantic analysis (as with comprehension), PET revealed activations generally in the posterior temporal lobe outside the superior temporal area or in the anterior portion of the superior temporal gyrus

though lesions to the temporoparietal area are associated with aphasic symptoms, PET activations in normals provide little evidence for these areas' participation in language

this striking dichotomy between PET and aphasia findings is perplexing

in summary: functional neuroimaging of normals points to new language areas that should be investigated

Electrophysiology of language

ERP components = brain waves

ERPs to index aspects of semantic and syntactic processing during language comprehension

Semantic processing and the N400 wave

N400 is a brain wave related to linguistic processes

named N400 because it is a negative polarity voltage peak in brain waves for words that usually reach maximum amplitude around 400 msec after onset of the word stimulus

it's especially sensitive to semantic aspects of linguistic input

discovered by Kutas & Hillyard (1980):

sentences were presented on a computer screen, one word at a time

the EEGs (electroencephalograms) were averaged for the sentences in each condition, and the ERPs were extracted by averaging data for the last word of the sentences separately for each sentence type

they compared the processing of the last word of sentences in three conditions:

1.     normal sentences that ended with a word congruent with the preceding context

e.g. 'It was his first day at work'

2.     sentences that ended with a word anomalous to the preceding context

e.g. 'He spread the warm bread with socks'

the amplitude of the N400 to the anomalous words ending the sentence was increased when compared to that of the N400 to congruent words (= the N400 effect)

3.     sentences that ended with a word semantically congruent with the preceding context but physically deviant

e.g. 'She put on her high-heeled SHOES' (capitals or a larger font)

in contrast, the semantically congruent but physically deviant words elicited a positive potential (P560) rather than an N400

non-semantic deviations like musical or grammatical violations fail to elicit the N400 effect, i.e. the N400 is specific to semantic analysis

N400 effects are modality independent – they happen for both reading + auditory input (for speakers of English, Dutch, French and Japanese, and for ASL with congenitally deaf signers)

the N400 reflects primarily post-lexical processes involved in lexical integration
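
a toy simulation (mine; signal shapes, amplitudes and noise are synthetic) of how averaging time-locked trials pulls an N400-like difference wave out of noisy single-trial EEG:

```python
# averaging many noisy time-locked trials, then taking the
# anomalous-minus-congruent difference wave (the N400 effect)
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 800)                              # ms after final-word onset

def simulated_trial(anomalous):
    # invented underlying response: a negativity peaking ~400 ms,
    # larger when the final word doesn't fit the sentence context
    amplitude = -6.0 if anomalous else -2.0
    signal = amplitude * np.exp(-((t - 400) ** 2) / (2 * 80.0 ** 2))
    return signal + rng.normal(0, 10, size=t.shape)  # single trials are noisy

def erp(anomalous, n_trials=100):
    # averaging time-locked trials cancels the noise, leaving the ERP
    return np.mean([simulated_trial(anomalous) for _ in range(n_trials)], axis=0)

n400_effect = erp(True) - erp(False)               # difference wave
print(f"difference wave peaks at ~{t[np.argmin(n400_effect)]} ms")
```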

Syntactic processing and event-related potentials

much less work has been done on electrophysiological correlates of other linguistic analysis such as syntax than on the N400

one ERP component that has shown up is the syntactic positive shift (SPS) = a large positive component elicited by words after a syntactic violation

 

Hagoort, Brown and Groothusen (1993):

asked subjects to silently read sentences presented one word at a time on a screen

they compared brain responses to normal sentences with those to sentences containing a grammatical violation

e.g. 'The spoiled child throws/*throw the toys on the floor' (in Dutch)

a positive shift emerges in the ERP about 600 msec after the syntactic violation (an SPS)

 

Münte et al. (1993) found a negative wave over the left frontal areas of the brain = syntactic anterior negativity (SAN)

the N400 and SAN have different scalp topographies, implying that they are generated in different neural structures in the brain

Stimulation mapping of the human brain

Ojemann et al. (1989):

electrodes are used to pass a small electrical current through the cortex (direct cortical stimulation) of an awake patient, momentarily disrupting activity, thus probing where a language process is localised (since it varies among patients) in order to leave critical language areas intact when removing epileptic areas

patients are shown line drawings of everyday objects and are asked to name those objects

during naming, regions of the left perisylvian cortex are stimulated with low amounts of electricity

when the patient makes an error in naming, or is unable to name the object, the deficit is correlated with the region being stimulated during that trial

stimulation of 100-200 patients revealed that:

aspects of language representation in the brain are organised in mosaic-like areas of 1-2 cm²

these mosaics usually include regions in the frontal + posterior temporal areas

however, in some patients, only frontal or posterior areas were observed

the correlation between these effects and either Broca's or Wernicke's areas was weak – some patients had naming disruption in the classic areas and others did not

anatomical localisations varied widely across patients – this has implications for how across-subject averaging methods, such as PET activation studies, reveal significant effects

Aphasia and electrophysiology

aphasia: processing or representation losses?

investigate the processing of spoken language and observe how the brain responds to linguistic inputs in normals vs aphasics

Swaab, Brown and Hagoort (1997):

tried to determine whether spoken-sentence comprehension might be hampered by a deficit in the online integration of lexical information

investigated spoken-sentence understanding in Broca's and Wernicke's aphasics vs normal, age-matched controls

two conditions:

1.     in half the sentences, the meaning of the final word matched the rest of the semantic context

2.     in the other half, the last word was semantically anomalous

expected the N400 amplitude to be larger for semantically anomalous words

non-aphasic, right hemisphere-damaged controls and aphasics with a mild comprehension deficit had an N400 like that of normals

aphasics with moderate to severe comprehension deficits had a reduced + delayed N400 effect

these results are compatible with the idea that low-comprehending aphasics have an impaired ability to integrate lexical information into a higher-order representation of the sentence context, because the N400 component indexes the process of lexical integration

electrophysiological data is useful because it's real-time, and can provide measures of processing in patients whose neurobehavioural deficit is too severe to use behaviour alone, because their comprehension is too low to understand task instructions

Summary (pg 321)

long-term question in psycholinguistics is whether semantic and syntactic processes are modular or interactive

these examples of clearly defined electrical activity linked to semantic and syntactic processing seems to indicate some modularity, right???

Questions

what's the difference between expectancy priming and semantic matching??? is it that semantic matching is a subset of expectancy, or that it's active???

is 'lettuce' a single syllable (pg 296)???

where is the extra-striate cortex???

have there been any comparisons of animal language and agrammatic aphasia???

what does the right hemisphere equivalent of Broca�s area do???

to what extent is Levelt's language production model Chomskyan???

in the picture-word interference task, when it says that 'an interfering auditory word is shown', does it mean an auditory stimulus???

what's lexical integration???

what's a lexeme??? vs a phoneme in Levelt's model???

at what level in Levelt's model are 'production errors'???

I'm surprised they're still sticking to the incredibly (misleadingly) broad categories of aphasia, especially with the connotations of the original names

is it only the cortices of the hemispheres that are cut off by commissurotomies etc.???

are strokes usually on just one side, localised??? what happens often to right-hemisphere stroke victims???

anatomical imaging with CT or MRI cannot detect many forms of functional lesions (i.e. regions of lowered metabolism or hypometabolism) – why is PET better for this???

what's an ERP??? is it localised to an area??? what does it tell us???

could there be sub-cortical language processing???